Understanding the implicit bias of training algorithms is of crucial importance for explaining the success of overparameterised neural networks. In this paper, we study the role of label noise in the training dynamics of a quadratically parametrised model through its continuous-time version. We explicitly characterise the solution selected by the stochastic flow and prove that it implicitly solves a Lasso program. To fully complete our analysis, we provide non-asymptotic convergence guarantees for the dynamics as well as conditions for support recovery. We also give experimental results which support our theoretical claims. Our findings highlight the fact that structured noise can induce better generalisation and help explain the greater performance of stochastic dynamics observed in practice.
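To make the setting concrete, here is a minimal numerical sketch, assuming a quadratic parametrisation beta = u ⊙ u and synthetic sparse data; the step size `eta`, label-noise level `sigma`, and initialisation are illustrative choices, not the paper's experimental setup.

```python
# Minimal sketch (not the paper's code): label-noise SGD on a quadratically
# parametrised model beta = u * u for sparse regression. The step size `eta`,
# label-noise level `sigma`, and data sizes are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 100
X = rng.standard_normal((n, d))
beta_star = np.zeros(d); beta_star[:3] = 1.0           # sparse ground truth
y = X @ beta_star

u = np.full(d, 0.05)                                   # small initialisation
eta, sigma, steps = 5e-3, 0.5, 20000
for _ in range(steps):
    y_noisy = y + sigma * rng.standard_normal(n)       # label noise is the only stochasticity
    residual = X @ (u * u) - y_noisy
    u -= eta * (4.0 / n) * (X.T @ residual) * u        # chain rule through beta = u * u
beta_hat = u * u
print("three largest coordinates:", np.sort(np.argsort(beta_hat)[-3:]))  # expected: the true support {0, 1, 2}
```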
Sharpness-Aware Minimization (SAM) is a recent training method that relies on worst-case weight perturbations and significantly improves generalization in various settings. We argue that the existing justifications for the success of SAM, which are based on a PAC-Bayes generalization bound and the idea of convergence to flat minima, are incomplete. Moreover, there is no explanation for the success of using $m$-sharpness in SAM, which has been shown to be essential for generalization. To better understand this aspect of SAM, we theoretically analyze its implicit bias for diagonal linear networks. We prove that SAM always chooses a solution that enjoys better generalization properties than standard gradient descent for a certain class of problems, and that this effect is amplified by using $m$-sharpness. We further study the properties of the implicit bias on non-linear networks empirically, where we show that fine-tuning a standard model with SAM can lead to significant generalization improvements. Finally, we provide convergence results of SAM for non-convex objectives when used with stochastic gradients. We illustrate these results empirically for deep networks and discuss their relation to the generalization behavior of SAM. The code of our experiments is available at https://github.com/tml-epfl/understanding-sam.
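For reference, the generic SAM update the analysis builds on can be sketched as follows; the toy objective, `rho`, and `eta` are illustrative, and the single full-batch perturbation shown here corresponds to $m$-sharpness with $m$ equal to the dataset size (smaller $m$ computes the perturbation separately on each mini-batch of size $m$).

```python
# Minimal sketch of one SAM step on a toy objective (illustrative, not the
# authors' code). `rho` (perturbation radius) and `eta` (step size) are
# hypothetical choices; `loss`/`grad` stand in for a (mini-)batch loss.
import numpy as np

def loss(w):                                     # toy non-convex objective
    return np.sum(np.sin(w) + 0.1 * w ** 2)

def grad(w):
    return np.cos(w) + 0.2 * w

def sam_step(w, rho=0.05, eta=0.1):
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case (ascent) perturbation
    return w - eta * grad(w + eps)               # descend using the gradient at the perturbed point

w = np.array([2.0, -1.0, 0.5])
for _ in range(200):
    w = sam_step(w)
print("final loss:", round(float(loss(w)), 4))
```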
Multi-task learning leverages structural similarities between multiple tasks in order to learn despite very scarce data. Motivated by the recent success of neural networks applied to data-scarce tasks, we consider a linear low-dimensional shared-representation model. Despite an extensive literature, existing theoretical results either guarantee weak estimation rates or require a large number of samples per task. This work provides the first estimation error bound for the trace-norm regularized estimator when the number of samples per task is small. The advantages of trace-norm regularization for learning data-scarce tasks extend to meta-learning and are confirmed empirically on synthetic datasets.
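As a rough illustration of the estimator's structure (not the paper's code), a proximal-gradient solver for trace-norm regularized multi-task regression can be sketched as follows; `lam`, `eta`, and the synthetic dimensions are hypothetical choices.

```python
# Rough illustration: proximal gradient descent on
#   min_W  sum_t ||X_t w_t - y_t||^2 / (2 n) + lam * ||W||_*
# with W = [w_1, ..., w_T]. `lam`, `eta`, and the synthetic sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T, n, d, r = 20, 5, 30, 2                      # many tasks, very few samples per task
U_shared = rng.standard_normal((d, r))         # shared low-dimensional representation
W_star = U_shared @ rng.standard_normal((r, T))
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
ys = [Xs[t] @ W_star[:, t] for t in range(T)]

def svt(W, tau):                               # prox of tau * ||.||_* (singular value thresholding)
    U_, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U_ @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

W, lam, eta = np.zeros((d, T)), 0.5, 0.05
for _ in range(500):
    G = np.column_stack([Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / n for t in range(T)])
    W = svt(W - eta * G, eta * lam)

print("rank of the estimate:", np.linalg.matrix_rank(W, tol=1e-3))  # typically low, close to r
```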
Personalization in federated learning can improve the accuracy of a user's model by trading off the model's bias (introduced by using data from other, potentially different, users) against its variance (due to the limited amount of data from any single user). In order to develop training algorithms that optimally balance this trade-off, it is necessary to extend our theoretical foundations. In this work, we formalize the personalized collaborative learning problem as stochastic optimization of a user's objective $f_0(x)$ while given access to $N$ related but different objectives of other users $\{f_1(x), \dots, f_N(x)\}$. We give convergence guarantees for two algorithms in this setting, a popular personalization method known as \emph{weighted gradient averaging} and a novel \emph{bias correction} method, and explore conditions under which we can optimally trade off their bias for a reduction in variance and achieve linear speedup with respect to the number of users $N$. Further, we also empirically study their performance, confirming our theoretical insights.
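A minimal sketch of the weighted gradient averaging idea, under assumed toy quadratic objectives; the weight `alpha`, step size, and noise model are illustrative and not taken from the paper.

```python
# Minimal sketch of weighted gradient averaging for personalized collaborative
# learning (illustrative; the weight `alpha`, step size, and toy quadratic
# objectives are hypothetical, not the paper's setting).
import numpy as np

rng = np.random.default_rng(0)
d, N = 5, 10
theta0 = rng.standard_normal(d)                                     # user 0's own optimum
thetas = [theta0 + 0.3 * rng.standard_normal(d) for _ in range(N)]  # related but different users

def stoch_grad(x, theta, noise=1.0):           # noisy gradient of f(x) = ||x - theta||^2 / 2
    return (x - theta) + noise * rng.standard_normal(d)

x, alpha, eta = np.zeros(d), 0.3, 0.05         # alpha: weight placed on the user's own gradient
for _ in range(2000):
    g_own = stoch_grad(x, theta0)
    g_others = np.mean([stoch_grad(x, th) for th in thetas], axis=0)
    x -= eta * (alpha * g_own + (1 - alpha) * g_others)

print("distance to the user's own optimum:", round(float(np.linalg.norm(x - theta0)), 3))
```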
Understanding the implicit bias of training algorithms is of crucial importance for explaining the success of overparameterised neural networks. In this paper, we study the dynamics of stochastic gradient descent over diagonal linear networks through its continuous-time version, namely stochastic gradient flow. We explicitly characterise the solution selected by the stochastic flow and prove that it always enjoys better generalisation properties than that of gradient flow. Quite surprisingly, we show that the convergence speed of the training loss controls the magnitude of the biasing effect: the slower the convergence, the better the bias. To fully complete our analysis, we provide convergence guarantees for the dynamics. We also give experimental results which support our theoretical claims. Our findings highlight the fact that structured noise can induce better generalisation, and they help explain the greater performance observed in practice of stochastic gradient descent over gradient descent.
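The following toy experiment sketches the comparison, assuming a diagonal linear network beta = u ⊙ u − v ⊙ v and synthetic sparse data; the initialisation scale, step size, and iteration count are illustrative choices rather than the paper's setup.

```python
# Toy sketch (not the paper's experiments): SGD vs. full-batch GD on a diagonal
# linear network beta = u*u - v*v for sparse regression.
import numpy as np

rng = np.random.default_rng(1)
n, d = 15, 40
X = rng.standard_normal((n, d))
beta_star = np.zeros(d); beta_star[:2] = 1.0
y = X @ beta_star

def train(batch):                              # batch = n -> gradient descent, batch = 1 -> SGD
    u = np.full(d, 0.5); v = np.full(d, 0.5)
    eta = 2e-3
    for _ in range(50000):
        idx = rng.choice(n, size=batch, replace=False)
        g = X[idx].T @ (X[idx] @ (u * u - v * v) - y[idx]) / batch
        u, v = u - eta * 2 * g * u, v + eta * 2 * g * v
    return u * u - v * v

for b in (n, 1):
    beta = train(b)
    print(f"batch={b:2d}  l1 norm of solution: {np.abs(beta).sum():.3f}")  # SGD's solution tends to be sparser
```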
The literature on robustness towards common corruptions shows no consensus on whether adversarial training can improve performance in this setting. First, we show that, when used with an appropriately selected perturbation radius, $\ell_p$ adversarial training can serve as a strong baseline against common corruptions, improving both accuracy and calibration. Then we explain why adversarial training performs better than data augmentation with simple Gaussian noise, which has been observed to be a meaningful baseline for common corruptions. Related to this, we identify the $\sigma$-overfitting phenomenon whereby Gaussian augmentation overfits to the particular standard deviation used for training, which has a significant detrimental effect on common-corruption accuracy. We discuss how to alleviate this problem, and then how to further enhance $\ell_p$ adversarial training by introducing an efficient relaxation of adversarial training based on learned perceptual image patch similarity. Through experiments on CIFAR-10 and ImageNet-100, we show that our approach not only improves the $\ell_p$ adversarial training baseline but also yields cumulative gains with data augmentation methods such as AugMix, DeepAugment, ANT, and SIN, leading to state-of-the-art performance on common corruptions. The code of our experiments is publicly available at https://github.com/tml-epfl/adv-training-corruptions.
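A toy sketch of the two training strategies being compared, on an assumed linear classifier; `eps`, `sigma`, and the one-step (FGSM-style) inner maximisation are illustrative simplifications of $\ell_p$ adversarial training, not the paper's setup.

```python
# Toy sketch of the two training strategies on a linear classifier
# (illustrative; `eps`, `sigma`, and the one-step inner maximisation are simplifications).
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)

def sigmoid_neg_margin(w, X, y):               # sigmoid(-y <w, x>) for the logistic loss
    return 1.0 / (1.0 + np.exp(np.clip(y * (X @ w), -30, 30)))

def input_grad(w, X, y):                       # gradient of the logistic loss w.r.t. the inputs
    return (-y * sigmoid_neg_margin(w, X, y))[:, None] * w[None, :]

def param_grad(w, X, y):                       # gradient of the logistic loss w.r.t. the weights
    return X.T @ (-y * sigmoid_neg_margin(w, X, y)) / len(y)

def train(augment, eta=0.5, steps=300, eps=0.1, sigma=0.1):
    w = np.zeros(d)
    for _ in range(steps):
        if augment == "adversarial":           # one-step l_inf perturbation (FGSM-style relaxation)
            X_aug = X + eps * np.sign(input_grad(w, X, y))
        else:                                  # simple Gaussian noise augmentation
            X_aug = X + sigma * rng.standard_normal(X.shape)
        w -= eta * param_grad(w, X_aug, y)
    return w

for mode in ("adversarial", "gaussian"):
    w = train(mode)
    print(f"{mode:12s} train accuracy: {np.mean(np.sign(X @ w) == y):.2f}")
```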
As a research community, we still lack a systematic understanding of the progress on adversarial robustness, which often makes it hard to identify the most promising ideas for training robust models. A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to overestimation of robustness. Our goal is to establish a standardized benchmark of adversarial robustness that reflects, as accurately as possible, the robustness of the considered models within a reasonable computational budget. To this end, we start by considering the image classification task and introduce restrictions (possibly loosened in the future) on the allowed models. We evaluate adversarial robustness with AutoAttack, an ensemble of white- and black-box attacks, which was recently shown in a large-scale study to improve almost all robustness evaluations compared to the original publications. To prevent overadaptation of new defenses to AutoAttack, we welcome external evaluations based on adaptive attacks, especially where AutoAttack flags a potential overestimation of robustness. Our leaderboard, hosted at https://robustbench.github.io/, contains evaluations of more than 120 models and aims at reflecting the current state of the art in image classification on a set of well-defined tasks in the $\ell_\infty$- and $\ell_2$-threat models and on common corruptions, with possible extensions in the future. Additionally, we open-source the library https://github.com/robustbench/robustbench, which provides unified access to more than 80 robust models to facilitate their downstream applications. Finally, based on the collected models, we analyze the impact of robustness on distribution shifts, calibration, out-of-distribution detection, fairness, privacy leakage, smoothness, and transferability.
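A short usage sketch of the library, based on its documented interface; argument names such as `threat_model` and the example model name may differ across versions, so treat this as an illustration rather than a definitive API reference.

```python
# Usage sketch of the open-sourced library (argument names and the example
# model entry may differ across library versions).
import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model

x_test, y_test = load_cifar10(n_examples=50)                 # small evaluation batch in [0, 1]
model = load_model(model_name='Carmon2019Unlabeled',         # one entry from the leaderboard
                   dataset='cifar10', threat_model='Linf')
model.eval()
with torch.no_grad():
    acc = (model(x_test).argmax(dim=1) == y_test).float().mean().item()
print(f"clean accuracy on 50 examples: {acc:.2%}")
```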
We propose the Square Attack, a score-based black-box $\ell_2$- and $\ell_\infty$-adversarial attack that does not rely on local gradient information and thus is not affected by gradient masking. Square Attack is based on a randomized search scheme which selects localized square-shaped updates at random positions so that at each iteration the perturbation is situated approximately at the boundary of the feasible set. Our method is significantly more query efficient and achieves a higher success rate compared to the state-of-the-art methods, especially in the untargeted setting. In particular, on ImageNet we improve the average query efficiency in the untargeted setting for various deep networks by a factor of at least 1.8 and up to 3 compared to the recent state-of-the-art $\ell_\infty$-attack of Al-Dujaili & O'Reilly (2020). Moreover, although our attack is black-box, it can also outperform gradient-based white-box attacks on the standard benchmarks, achieving a new state-of-the-art in terms of the success rate. The code of our attack is available at https://github.com/max-andr/square-attack.
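A much-simplified sketch of the randomized square-shaped search, on an assumed linear score function; the real attack schedules the square size, handles colour channels and batches of images, and uses a margin-based loss over the model's logits.

```python
# Much-simplified sketch of the randomized square-shaped search for an l_inf
# attack on an assumed linear score (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
h, eps, iters, s = 16, 0.05, 500, 4            # image side, budget, queries, square side
w = rng.standard_normal(h * h)                 # toy linear "classifier"
x = rng.uniform(0, 1, size=(h, h))             # clean "image"
y = 1.0                                        # its label (+1)

def margin(x_img):                             # lower margin = more adversarial
    return y * (x_img.ravel() @ w)

x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=(h, h)), 0, 1)  # start at the l_inf boundary
best = margin(x_adv)
for _ in range(iters):
    i, j = rng.integers(0, h - s, size=2)      # random position of the square
    cand = x_adv.copy()
    cand[i:i + s, j:j + s] = np.clip(x[i:i + s, j:j + s] + eps * rng.choice([-1.0, 1.0]), 0, 1)
    if margin(cand) < best:                    # keep the localized update only if it helps
        x_adv, best = cand, margin(cand)

print("margin before / after attack:", round(margin(x), 3), "/", round(best, 3))
```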
This article formulates a generic representation of a path-following controller operating under constrained motion, developed in the context of surgical robotics. It considers two types of constrained motion: i) Bilaterally Constrained Motion, also called Remote Center of Motion (RCM), and ii) Unilaterally Constrained Motion (UCM). In the first case, the incision hole has almost the same diameter as the robotic tool. In contrast, in the second case, the diameter of the incision orifice is larger than the tool diameter, which offers more space for the surgical instrument to move freely, without constraints, before touching the incision wall. The proposed method combines two tasks that must operate hierarchically: i) respect the RCM or UCM constraints, formulated as equality or inequality constraints respectively, and ii) perform a surgical assignment, e.g., scanning or ablation, expressed as a 3D path-following task. The proposed methods and materials were tested first on our simulator, which mimics realistic conditions of middle ear surgery, and then on an experimental platform. Different validation scenarios were carried out experimentally to assess each developed approach quantitatively and qualitatively. Although ultimate precision was not the goal of this work, our concept is validated with sufficient accuracy (below 100 micrometres) for ear surgery.
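One hedged way to write the two constraint regimes together with the path-following task is as a prioritized quadratic program; the notation below (d(q), J_c, J_t, the gain λ, the control period Δt) is assumed for illustration and is not taken from the article.

```latex
% Hedged formulation (assumed notation, not taken from the article):
%   q: joint configuration,  \dot{q}: joint velocities to be computed
%   d(q): distance from the incision point to the tool axis,  J_c = \partial d / \partial q
%   J_t: task Jacobian of the tool tip,  \dot{x}^{*}: desired 3D path-following velocity
%   \lambda: stabilizing gain,  \Delta t: control period,  r_{\mathrm{inc}}, r_{\mathrm{tool}}: radii
\begin{align}
\min_{\dot{q}} \;\; & \bigl\lVert J_t(q)\,\dot{q} - \dot{x}^{*} \bigr\rVert^{2}
    && \text{(lower-priority task: 3D path following)} \\
\text{s.t.} \;\; & J_c(q)\,\dot{q} = -\lambda\, d(q)
    && \text{(RCM: bilateral, equality constraint)} \\
\text{or} \;\; & d(q) + \Delta t\, J_c(q)\,\dot{q} \;\le\; r_{\mathrm{inc}} - r_{\mathrm{tool}}
    && \text{(UCM: unilateral, inequality constraint)}
\end{align}
```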
Several self-supervised representation learning methods have been proposed for reinforcement learning (RL) with rich observations. For real-world applications of RL, recovering underlying latent states is crucial, particularly when sensory inputs contain irrelevant and exogenous information. In this work, we study how information bottlenecks can be used to construct latent states efficiently in the presence of task-irrelevant information. We propose architectures that utilize variational and discrete information bottlenecks, coined as RepDIB, to learn structured factorized representations. Exploiting the expressiveness brought by factorized representations, we introduce a simple, yet effective, bottleneck that can be integrated with any existing self-supervised objective for RL. We demonstrate this across several online and offline RL benchmarks, along with a real robot arm task, where we find that compressed representations with RepDIB can lead to strong performance improvements, as the learned bottlenecks help predict only the relevant state while ignoring irrelevant information.
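As a small illustration of the kind of bottleneck such architectures build on, the following sketches a Gaussian variational information bottleneck layer; it is not the RepDIB architecture or code, and the dimensions and weights are placeholders.

```python
# Small illustration of a Gaussian variational information bottleneck layer
# (not the RepDIB architecture or code; dimensions and weights are placeholders).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_z = 32, 8
W_mu = 0.1 * rng.standard_normal((d_in, d_z))
W_lv = 0.1 * rng.standard_normal((d_in, d_z))

def bottleneck(x):
    mu, log_var = x @ W_mu, x @ W_lv
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(d_z)    # reparameterization trick
    # KL( N(mu, sigma^2) || N(0, I) ): the compression penalty added to the downstream objective
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return z, kl

x = rng.standard_normal(d_in)                  # stands in for an encoded observation
z, kl = bottleneck(x)
print("latent shape:", z.shape, " KL penalty:", round(float(kl), 3))
```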